41 - Beyond the Patterns - David B. Lindell & Julien Martel (Stanford U): Implicit Neural Representation Networks for Fitting Signals, Derivatives, and Integrals [ID:36027]

Welcome everybody to a new episode of Beyond the Patterns and today I have the great pleasure

to introduce David B. Lindell and Julien Martel from Stanford Engineering in our short video

here.

So, both of them are from Stanford Engineering.

Julien Martel is a postdoctoral research fellow at Stanford University in the Computational Imaging

Lab led by Gordon Wetzstein.

His research interests are in unconventional visual sensing and processing.

More specifically his current topics of research include the co-design of hardware and algorithms

for visual sensing, the design of methods for vision sensors with in-pixel computing

capabilities and the use of novel representations for visual data such as neural implicit representations.

David B. Lindell is also a postdoctoral scholar in the Department of Electrical Engineering

at Stanford University.

His research interests are in the areas of computational imaging, machine learning and

remote sensing.

Most recently he has worked on developing neural representations for applications in

vision and rendering.

He has also developed advanced 3D imaging systems to capture objects hidden around corners or

through scattering media.

So it's a great pleasure to have both of them here.

And today their presentation will be entitled Implicit Neural Representation Networks for

Fitting Signals, Derivatives and Integrals.

So I'm very much looking forward to your presentation and the stage is yours.

Thanks very much for that introduction and again thank you for the invitation.

We're delighted to be here.

Julien and I will be sharing this presentation and as I understand there will be a chance

to engage in some Q&A afterwards so we're looking forward to that as well.

So one thing we sometimes take for granted is the way that we work with signals and this

talk is about an emerging way to represent signals as the output of simple neural networks.

And we'll go over some of our recent work on neural representations and coordinate based

networks to represent signals, their derivatives as well as integrals.

And so how we represent signals can actually have a pretty large impact on how we solve

problems and the types of algorithms that we use.

And very commonly discrete representations for signals are used.

For example, we represent images with a grid of pixels, we represent shapes with maybe

a point cloud, and we use discrete samples of a sound wave's amplitude to represent

audio signals.

And recently neural implicit representations or coordinate based networks have emerged

as a new way to represent 3D shapes encoded as the zero level set of a signed distance

function.

So here a signal is parameterized continuously as a ReLU neural network that maps some XYZ

coordinate input to the signed distance or could be the occupancy at that XYZ location.
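The coordinate-based network described here can be sketched as a small multilayer perceptron that maps an xyz coordinate to a scalar signed distance. The following is a minimal, hypothetical illustration in NumPy with untrained random weights and arbitrary layer sizes, not the speakers' actual architecture:

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(x, 0.0)

# Toy, untrained weights -- purely illustrative. In practice the weights
# are fit so that the zero level set {p : f(p) = 0} traces the surface.
W1, b1 = rng.normal(size=(3, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 32)), np.zeros(32)
W3, b3 = rng.normal(size=(32, 1)), np.zeros(1)

def sdf(points):
    """Map an (N, 3) array of xyz coordinates to (N, 1) signed distances."""
    h = relu(points @ W1 + b1)
    h = relu(h @ W2 + b2)
    return h @ W3 + b3

pts = rng.uniform(-1.0, 1.0, size=(4, 3))  # query 4 points in [-1, 1]^3
d = sdf(pts)
```

Note that the memory cost here is the weight matrices alone, independent of any grid resolution, which is exactly the property discussed next.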

And this is an emerging representation that has a lot of benefits.

First it's agnostic to the specific grid resolution and the memory required to represent the

signal generally scales with the complexity of the signal independent of the spatial resolution

or some globally highest frequency in the signal.

And while ReLU networks are capable of representing simple objects such as the Stanford bunny,

they usually fail to encode complex or larger-scale scenes with fine details, such as the

room-scale environment we see here in the bottom image, which has a lot of artifacts.

And more generally we could ask whether these architectures are able to represent other

complex signals like images or audio signals.

Part of a video series:

Accessible via

Open access

Duration

01:20:04

Recording date

2021-09-14

Uploaded on

2021-09-14 22:06:06

Language

en-US

It’s a great pleasure to welcome David B. Lindell and Julien Martel from Stanford Engineering to our Lab!

Abstract: Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations and new capabilities in neural rendering and view synthesis. However, conventional network architectures for such implicit neural representations are incapable of modeling signals at scale with fine detail and fail to represent derivatives and integrals of signals. In this talk, we describe three recent approaches to solve these challenging problems. First, we introduce sinusoidal representation networks, or SIREN, which are ideally suited for representing complex natural signals and their derivatives. Using SIREN, we can represent images, wavefields, video, sound, and their derivatives, allowing us to solve differential equations using this type of neural network. Second, we introduce a new framework for solving integral equations using implicit neural representation networks. Our automatic integration framework, AutoInt, enables the calculation of any definite integral with two evaluations of a neural network. This allows fast inference and rendering when applied to neural rendering techniques based on volume rendering. Finally, we introduce a new architecture and method for scaling up implicit representations, called Adaptive Coordinate Networks (ACORN). The approach relies on a hybrid implicit–explicit representation and a learned, online multiscale decomposition of the target signal. We use ACORN to demonstrate the first experiments that fit gigapixel images to nearly 40 dB peak signal-to-noise ratio (a 1000× increase in scale over previous experiments), and we reduce training times for 3D shape fitting from days to hours or minutes while improving memory requirements by over an order of magnitude.
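As a rough illustration of the SIREN idea (a sketch, not the authors' implementation), sine activations sin(ω₀(Wx + b)) replace ReLUs; since the derivative of a sine is a phase-shifted sine, the derivative of a SIREN is again a SIREN-like network, which is what makes fitting derivatives tractable. Below is a tiny untrained 1-D example in NumPy; the layer sizes are arbitrary, and ω₀ = 30 follows the scaling reported in the SIREN paper:

```python
import numpy as np

rng = np.random.default_rng(0)
omega_0 = 30.0  # first-layer frequency scaling from the SIREN paper

# Tiny untrained 1-D network; sizes chosen only for illustration.
W1 = rng.uniform(-1.0, 1.0, size=(1, 16))
b1 = rng.uniform(-1.0, 1.0, size=16)
w_bound = np.sqrt(6 / 16) / omega_0
W2 = rng.uniform(-w_bound, w_bound, size=(16, 1))
b2 = np.zeros(1)

def siren(t):
    h = np.sin(omega_0 * (t @ W1 + b1))  # sine activation
    return h @ W2 + b2

def siren_grad(t):
    # d/dt sin(w*(W1 t + b1)) = w*W1 * cos(w*(W1 t + b1)):
    # the derivative is itself a (phase-shifted) sinusoidal network.
    pre = omega_0 * (t @ W1 + b1)
    return (np.cos(pre) * (omega_0 * W1)) @ W2

t = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)
y = siren(t)
dy = siren_grad(t)
```

The closed-form gradient can be checked against a central finite difference, which is the property that lets SIREN supervise on derivatives directly.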

Short Bios

Julien Martel (http://www.jmartel.net/) is a Postdoctoral Research Fellow at Stanford University in the Computational Imaging Lab led by Gordon Wetzstein. His research interests are in unconventional visual sensing and processing. More specifically, his current topics of research include the co-design of hardware and algorithms for visual sensing, the design of methods for vision sensors with in-pixel computing capabilities, and the use of novel representations for visual data such as neural implicit representations.

David B. Lindell (https://davidlindell.com) is a Postdoctoral Scholar in the Department of Electrical Engineering at Stanford University. His research interests are in the areas of computational imaging, machine learning, and remote sensing. Most recently, he has worked on developing neural representations for applications in vision and rendering. He has also developed advanced 3D imaging systems to capture objects hidden around corners or through scattering media.

Register for more upcoming talks here!

References

SIREN: https://vsitzmann.github.io/siren/

ACORN: https://www.computationalimaging.org/publications/acorn/

AutoInt: https://www.computationalimaging.org/publications/automatic-integration/

This video is released under CC BY 4.0. Please feel free to share and reuse.

For reminders when a new video is posted, follow us on Twitter or LinkedIn. Also, join our network for information about talks, videos, and job offers in our Facebook and LinkedIn Groups.

Music Reference: 
Damiano Baldoni - Thinking of You (Intro)
Damiano Baldoni - Poenia (Outro)

 

Tags

beyond the patterns